vaccine misinformation
Utilising Large Language Models for Generating Effective Counter Arguments to Anti-Vaccine Tweets
Dhanuka, Utsav, Poddar, Soham, Ghosh, Saptarshi
In an era where public health is increasingly influenced by information shared on social media, combatting vaccine skepticism and misinformation has become a critical societal goal. Misleading narratives around vaccination have spread widely, creating barriers to achieving high immunisation rates and undermining trust in health recommendations. While efforts to detect misinformation have made significant progress, the generation of real-time counter-arguments tailored to debunk such claims remains an insufficiently explored area. In this work, we explore the capabilities of LLMs to generate sound counter-argument rebuttals to vaccine misinformation. Building on prior research in misinformation debunking, we experiment with various prompting strategies and fine-tuning approaches to optimise counter-argument generation. Additionally, we train classifiers to categorise anti-vaccine tweets into multi-label categories such as concerns about vaccine efficacy, side effects, and political influences, allowing for more context-aware rebuttals. Our evaluation, conducted through human judgment, LLM-based assessments, and automatic metrics, reveals strong alignment across these methods. Our findings demonstrate that integrating label descriptions and structured fine-tuning enhances counter-argument effectiveness, offering a promising approach for mitigating vaccine misinformation at scale.
- North America > United States (0.93)
- Europe > United Kingdom (0.14)
- Asia > India > West Bengal > Kharagpur (0.04)
- Health & Medicine > Therapeutic Area > Vaccines (1.00)
- Health & Medicine > Therapeutic Area > Immunology (1.00)
- Government > Regional Government > North America Government > United States Government (0.46)
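The multi-label categorisation described in the abstract above can be illustrated with a minimal sketch. This is not the paper's actual classifier (which is a trained model); the category names and trigger keywords here are assumptions chosen purely for illustration of what multi-label output looks like:

```python
# Minimal illustrative multi-label tagger for anti-vaccine tweets.
# The categories mirror those named in the abstract (efficacy concerns,
# side effects, political influences); the keywords are invented for
# this sketch -- the paper trains learned classifiers, not keyword rules.

CATEGORY_KEYWORDS = {
    "efficacy": ["doesn't work", "ineffective", "still got covid"],
    "side_effects": ["side effect", "blood clot", "myocarditis"],
    "political": ["government", "big pharma", "mandate"],
}

def categorise(tweet: str) -> list[str]:
    """Return every matching category (a tweet can carry several labels)."""
    text = tweet.lower()
    return [
        category
        for category, keywords in CATEGORY_KEYWORDS.items()
        if any(kw in text for kw in keywords)
    ]

# A single tweet can trigger multiple labels at once:
labels = categorise("The government mandate pushes an ineffective vaccine")
# labels -> ["efficacy", "political"]
```

The point of the multi-label setup is visible in the example: one tweet simultaneously voices an efficacy concern and a political one, so a single-label classifier would lose information that a targeted rebuttal needs.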
Vaccine misinformation can easily poison AI – but there's a fix
Artificial intelligence chatbots already have a misinformation problem – and it is relatively easy to poison such AI models by adding a bit of medical misinformation to their training data. Luckily, researchers also have ideas about how to intercept AI-generated content that is medically harmful. Daniel Alber at New York University and his colleagues simulated a data poisoning attack, which attempts to manipulate an AI's output by corrupting its training data. They inserted AI-generated medical misinformation into their own experimental versions of a popular AI training dataset. Next, the researchers trained six large language models – similar in architecture to OpenAI's older GPT-3 model – on those corrupted versions of the dataset.
- Research Report > Strength High (0.34)
- Research Report > Experimental Study (0.34)
- Health & Medicine > Therapeutic Area > Immunology (0.95)
- Health & Medicine > Therapeutic Area > Vaccines (0.94)
COVID-19 Vaccine Misinformation in Middle Income Countries
Kim, Jongin, Bak, Byeo Rhee, Agrawal, Aditya, Wu, Jiaxi, Wirtz, Veronika J., Hong, Traci, Wijaya, Derry
This paper introduces a multilingual dataset of COVID-19 vaccine misinformation, consisting of annotated tweets from three middle-income countries: Brazil, Indonesia, and Nigeria. The expertly curated dataset includes annotations for 5,952 tweets, assessing their relevance to COVID-19 vaccines, presence of misinformation, and the themes of the misinformation. To address challenges posed by domain specificity, the low-resource setting, and data imbalance, we adopt two approaches for developing COVID-19 vaccine misinformation detection models: domain-specific pre-training and text augmentation using a large language model. Our best misinformation detection models demonstrate improvements ranging from 2.7 to 15.9 percentage points in macro F1-score compared to the baseline models. Additionally, we apply our misinformation detection models in a large-scale study of 19 million unlabeled tweets from the three countries between 2020 and 2022, showcasing the practical application of our dataset and models for detecting and analyzing vaccine misinformation in multiple countries and languages. Our analysis indicates that percentage changes in the number of new COVID-19 cases are positively associated with COVID-19 vaccine misinformation rates in a staggered manner for Brazil and Indonesia, and there are significant positive associations between the misinformation rates across the three countries.
- South America > Brazil (0.47)
- Asia > Indonesia (0.47)
- Africa > Nigeria (0.37)
- (9 more...)
- Research Report > Experimental Study (0.93)
- Research Report > New Finding (0.68)
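The macro F1-score reported in the abstract above averages per-class F1 with equal weight per class, which is why it suits the imbalanced setting the authors describe: rare misinformation themes count as much as common ones. A minimal sketch of the metric, using toy labels rather than the paper's data:

```python
# Macro F1: compute precision/recall/F1 for each class separately,
# then average the per-class F1 scores with equal weight.

def f1_per_class(y_true, y_pred, cls):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
    if tp == 0:
        return 0.0  # no correct predictions for this class
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def macro_f1(y_true, y_pred):
    classes = sorted(set(y_true) | set(y_pred))
    return sum(f1_per_class(y_true, y_pred, c) for c in classes) / len(classes)

# Toy example with three classes (labels invented for this sketch):
y_true = ["misinfo", "misinfo", "relevant", "other"]
y_pred = ["misinfo", "relevant", "relevant", "other"]
score = macro_f1(y_true, y_pred)  # -> 7/9, approximately 0.778
```

Because each class contributes 1/len(classes) to the average regardless of its frequency, a model that ignores a rare misinformation theme is penalised as heavily as one that ignores a common theme – unlike micro-averaged or accuracy-style metrics.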
Vax-Culture: A Dataset for Studying Vaccine Discourse on Twitter
Zarei, Mohammad Reza, Christensen, Michael, Everts, Sarah, Komeili, Majid
Vaccine hesitancy continues to be a major challenge for public health officials during the COVID-19 pandemic. As this hesitancy undermines vaccine campaigns, many researchers have sought to identify its root causes, finding that the increasing volume of anti-vaccine misinformation on social media platforms is a key element of this problem. We explored Twitter as a source of misleading content with the goal of extracting overlapping cultural and political beliefs that motivate the spread of vaccine misinformation. To do this, we have collected a dataset of vaccine-related Tweets and annotated them with the help of a team of annotators with a background in communications and journalism. Ultimately, we hope this can lead to effective and targeted public health communication strategies for reaching individuals with anti-vaccine beliefs. Moreover, this information helps with developing Machine Learning models to automatically detect vaccine misinformation posts and combat their negative impacts. In this paper, we present Vax-Culture, a novel Twitter COVID-19 dataset consisting of 6373 vaccine-related tweets accompanied by an extensive set of human-provided annotations, including vaccine-hesitancy stance, indication of any misinformation in tweets, the entities criticized and supported in each tweet, and the communicated message of each tweet. Moreover, we define five baseline tasks, including four classification tasks and one sequence generation task, and report the results of a set of recent transformer-based models for them. The dataset and code are publicly available at https://github.com/mrzarei5/Vax-Culture.
- North America > Canada > Ontario > National Capital Region > Ottawa (0.14)
- Europe > United Kingdom (0.04)
- North America > United States (0.04)
- Asia > China > Shaanxi Province > Xi'an (0.04)
- Health & Medicine > Therapeutic Area > Vaccines (1.00)
- Health & Medicine > Therapeutic Area > Immunology (1.00)
Should There Be Enforceable Ethics Regulations on Generative AI?
The growing potential of generative AI is clouded by its possible harms, prompting some calls for regulation. ChatGPT and other generative AI have taken center stage for innovation, with companies racing to introduce their own respective twists on the technology. Questions about the ethics of AI have likewise escalated over the ways the technology could spread misinformation, support hacking attempts, or raise doubts about the ownership and validity of digital content. The issue of ethics and AI is not new, according to Cynthia Rudin, the Earl D. McLean, Jr. professor of computer science, electrical and computer engineering, statistical science, mathematics, and biostatistics & bioinformatics at Duke University. She says AI recommender systems already have been pointed to for such ills as contributing to depression among teenagers, algorithms amplifying hate speech that spurred the 2017 Rohingya massacre in Myanmar, vaccine misinformation, and the spread of propaganda that contributed to insurrection in the United States on January 6, 2021. "If we haven't learned our lesson about ethics by now, it's not going to be when ChatGPT shows up," says Rudin.
- North America > United States (0.35)
- Asia > Myanmar (0.25)
- Government (1.00)
- Media > News (0.79)
- Health & Medicine > Therapeutic Area > Vaccines (0.57)
COVID, vaccine misinformation spread by hundreds of websites, analysis finds
More than 500 websites have promoted misinformation about the coronavirus – including debunked claims about vaccines, according to a firm that rates the credibility of websites. NewsGuard announced Wednesday that, of the more than 6,700 websites it has analyzed, 519 have published false information about COVID-19. Some of the sites publish dubious health information or political conspiracy theories, while others were "created specifically to spread misinformation about COVID-19," the company says on its website. "It's become virtually impossible for people to tell the difference between a generally reliable site and an untrustworthy site," Gordon Crovitz, co-founder of NewsGuard, told USA TODAY in an exclusive interview. "And that is why there is such a big business in publishing this information."
- North America > United States (1.00)
- Europe > United Kingdom (0.05)
- Europe > Italy (0.05)
- (2 more...)